perm filename NATURA[S80,JMC] blob
sn#510793 filedate 1980-05-13 generic text, type C, neo UTF8
Natural Kinds
The recent philosophical discussions of "natural kinds" are
relevant to AI. We introduce the issue by asking: %2"What is
the definition of a lemon?"%1. One is naively inclined to begin
something like: %2"A lemon is a yellow fruit, spheroidal in
shape, with a tart taste, and usually about one to two inches in
diameter, etc."%1. But suppose some smart geneticist or smart
plant nutritionist caused a lemon tree to produce a blue fruit.
Wouldn't it be a blue lemon?
In fact, suppose I tell you that Luther Burbank developed a blue
lemon, but it never became popular.
The answer currently popular among
philosophers, due to Putnam, is that lemons form a "natural kind"
with many common properties, not all of which are known to any
given speaker and some of which are not known to anyone. The speaker
can show you a lemon or tell you how you can probably distinguish them
from other objects in the local supermarket, but he won't even
try to give you a rule for distinguishing them from any conceivable
counterfeit. Science or experience can tell more and more about
lemons but can't define them completely. It is important that
the lemons we see don't grade off into other fruits the way
colors grade into one another.
We should contrast the case of lemons with that of a mountain
or the color red, which do grade off. There is no known
precise line between mountains and lesser eminences, and we don't
expect science to provide us with one.
Another case worth considering is that of the word "bachelor"
in the sense of adult man who has never been married. Here we
have a conventional definition using words assumed already known.
What has this recent philosophical discovery to do with
AI?
It seems to me that an intelligent computer program will
have to treat predicates in its internal language in a way that
provides for all three cases (and maybe others) and which perhaps
knows the difference between them. I say "perhaps" because philosophers
didn't clearly distinguish them until recently, and we might be
quite satisfied with an intelligent machine even if it didn't
have the last word in philosophy.
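The distinction among kinds of predicates can be sketched in code. The following is a hypothetical illustration, not anything proposed in the text: it tags each predicate in a program's internal language by which kind of term it denotes, so that typical properties of a natural kind are defeasible presumptions while a conventional definition is necessary and sufficient. All names here (NaturalKind, ConventionalTerm, GradedTerm) are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class NaturalKind:
    """Known by examples and typical properties; never fully defined."""
    name: str
    typical: dict  # e.g. {"color": "yellow"} -- revisable, not definitional

    def matches_typical(self, obj: dict) -> bool:
        # Typical properties give only a presumption, never a proof:
        # an object failing this test may still belong to the kind.
        return all(obj.get(k) == v for k, v in self.typical.items())

@dataclass
class ConventionalTerm:
    """Entirely determined by a definition in already-known terms."""
    name: str
    definition: callable  # necessary and sufficient condition

    def matches(self, obj: dict) -> bool:
        return self.definition(obj)

@dataclass
class GradedTerm:
    """Grades off with no precise boundary, like 'mountain' or 'red'."""
    name: str
    degree: callable  # maps an object to a degree in [0, 1]

lemon = NaturalKind("lemon", {"color": "yellow", "taste": "tart"})
bachelor = ConventionalTerm(
    "bachelor",
    lambda p: p.get("sex") == "male" and p.get("adult")
              and not p.get("ever_married"))
mountain = GradedTerm("mountain",
                      lambda m: min(1.0, m.get("height_m", 0) / 2500))

# A blue lemon fails the typicality test yet may still be a lemon;
# by contrast, someone failing the bachelor definition is no bachelor.
blue_lemon = {"color": "blue", "taste": "tart"}
```

The design point is that only ConventionalTerm supports a definitive yes-or-no test; a NaturalKind would need its typicality judgment combined with other evidence, and a GradedTerm returns a degree rather than a verdict.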
To summarize, the machine must admit at least the following
kinds of terms:
1. Conventional terms like "bachelor". What they mean
is entirely determined by their definitions.